3 research outputs found

    A comparative analysis of automatic deep neural networks for image retrieval

    Feature descriptors and similarity measures are the two core components of content-based image retrieval, and both remain crucial issues because of the “semantic gap” between human conceptual meaning and a machine's low-level features. Recently, deep learning techniques have attracted great interest in image recognition, especially for extracting feature information from images. In this paper, we investigated, compared, and evaluated different deep convolutional neural networks and their application to image classification and automatic image retrieval. The approaches are: a simple convolutional neural network, AlexNet, GoogleNet, ResNet-50, VGG-16, and VGG-19. We compared the performance of the different approaches to prior work in this domain using known accuracy metrics and analyzed the differences between the approaches. The performance of these approaches is investigated on the public image datasets Corel 1K, Corel 10K, and Caltech 256. From this we deduced that the GoogleNet approach yields the best overall results. In addition, we investigated and compared different similarity measures. Based on these exhaustive investigations, we developed a novel algorithm for image retrieval.
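    The retrieval step this abstract describes can be sketched as ranking a database of CNN feature descriptors by their similarity to a query descriptor. In the minimal sketch below, random vectors stand in for the descriptors a network such as GoogleNet or ResNet-50 would emit (e.g., activations of a late pooling layer); `retrieve` and the 1024-D size are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors (higher = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, database, top_k=3):
    """Return indices of the top_k database descriptors most similar to the query."""
    scores = [cosine_similarity(query, d) for d in database]
    return sorted(range(len(database)), key=lambda i: scores[i], reverse=True)[:top_k]

rng = np.random.default_rng(0)
database = rng.normal(size=(10, 1024))            # 10 images, 1024-D stand-in descriptors
query = database[4] + 0.01 * rng.normal(size=1024)  # near-duplicate of image 4

print(retrieve(query, database, top_k=1))  # image 4 ranks first
```

    In practice the descriptor dimensionality and the choice of layer depend on the network; only the ranking logic is shown here.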

    SPECTRAL ALGORITHM FOR CONTENT-BASED IMAGE RETRIEVAL

    Colour images are rich in visual information. Searching a large-scale database for the images most similar to a query image on the basis of visual features is still a challenge in Content-Based Image Retrieval (CBIR) because of the semantic-gap issue. In this paper, we propose a fused retrieval method that narrows the gap between high-level and low-level meanings through two aspects. The first is increasing the effectiveness of the image representation: data-level fusion features are suggested, combining local features from the Discrete Cosine Transform (DCT) and Local Binary Patterns (LBP), in the frequency and spatial domains respectively, processed by a spectral (graph-based) clustering algorithm, in addition to a global weighted LBP feature. The second is fusing the multiple retrieval similarity scores (evidences) obtained from the global (LBP) and local (DCT-LBP) features above by score-level fusion. The method is evaluated on the standard public WANG dataset.
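    Two of the building blocks named above, the LBP texture descriptor and score-level fusion, can be sketched as follows. This is a minimal illustration, assuming the basic 8-neighbour, 3×3 LBP and a simple weighted-sum fusion rule; the paper's exact LBP variant, weighting, and fusion formula are not reproduced here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern codes for the interior pixels."""
    # Offsets of the 8 neighbours, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    centre = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray):
    """256-bin normalised LBP histogram, usable as a global texture descriptor."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256)
    return hist / hist.sum()

def fuse_scores(score_a, score_b, w=0.5):
    """Weighted-sum score-level fusion of two retrieval scores (one common choice)."""
    return w * score_a + (1 - w) * score_b
```

    A uniform image yields code 255 everywhere (every neighbour is >= the centre), so its histogram has all mass in bin 255, which is a convenient sanity check for the implementation.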

    Chest radiographs images retrieval using deep learning networks

    No full text
    Chest diseases are among the most common diseases today. More than one million people with pneumonia are admitted to hospital, and about 50,000 die, annually in the U.S. alone. Coronavirus disease (COVID-19) is also a dangerous disease that threatens health by affecting the lungs of many people around the world. Chest X-ray and CT-scan images are radiological imaging modalities that can help detect COVID-19. A radiologist may need to compare a patient's image with the most similar existing images. Content-based image retrieval for medical images offers such a facility, based on visual feature descriptors and similarity measurements. In this paper, a retrieval algorithm was developed to tackle these challenges, based on deep convolutional neural networks (e.g., ResNet-50, AlexNet, and GoogleNet) to produce an effective feature descriptor. Similarity measures such as City block and Cosine were also employed to compare two images. Chest X-ray and CT-scan datasets were used to evaluate the proposed algorithms, with the highest performance achieved by ResNet-50 (99% COVID-19 (+) and 98% COVID-19 (–)) on X-ray and GoogleNet (84% COVID-19 (+) and 81% COVID-19 (–)) on CT-scan. Performance increased by about 1-4% when voting with a k-nearest neighbor classifier was used.
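    The two similarity measures named in this abstract, together with the k-nearest-neighbour voting step, can be sketched as below. The 2-D toy vectors and the "pos"/"neg" labels are placeholders for CNN descriptors of chest images and their COVID-19 classes; this is an illustrative sketch, not the paper's pipeline.

```python
import numpy as np
from collections import Counter

def city_block(a, b):
    """City block (Manhattan, L1) distance; smaller means more similar."""
    return float(np.sum(np.abs(a - b)))

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more similar."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def knn_vote(query, database, labels, k=3, dist=city_block):
    """Label the query by majority vote among its k nearest database images."""
    order = sorted(range(len(database)), key=lambda i: dist(query, database[i]))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy example: two well-separated clusters of stand-in descriptors.
database = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
labels = ["neg", "neg", "neg", "pos", "pos", "pos"]
query = np.array([5.2, 5.1])

print(knn_vote(query, database, labels, k=3))  # prints "pos"
```

    Swapping `dist=cosine_distance` into `knn_vote` changes the retrieval metric without touching the voting logic, which mirrors how the abstract compares City block against Cosine.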